Generate To Adapt: Aligning Domains using Generative Adversarial Networks
Domain Adaptation is an actively researched problem in Computer Vision. In
this work, we propose an approach that leverages unsupervised data to bring the
source and target distributions closer in a learned joint feature space. We
accomplish this by inducing a symbiotic relationship between the learned
embedding and a generative adversarial network. This is in contrast to methods
which use the adversarial framework for realistic data generation and
retraining deep models with such data. We demonstrate the strength and
generality of our approach by performing experiments on three different tasks
with varying levels of difficulty: (1) digit classification (MNIST, SVHN, and
USPS datasets), (2) object recognition using the OFFICE dataset, and (3) domain
adaptation from synthetic to real data. Our method achieves state-of-the-art
performance in most experimental settings and is, by far, the only GAN-based
method that has been shown to work well across different datasets such as
OFFICE and DIGITS.
Comment: Accepted as spotlight talk at CVPR 2018. Code available here:
https://github.com/yogeshbalaji/Generate_To_Adap
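The abstract describes a symbiotic loop between a learned embedding and a GAN rather than GAN-based data generation followed by retraining. One plausible way to picture that training schedule is the alternating-update sketch below; the component names (`D` for the discriminator, `G` for the generator, `F_C` for the shared embedding plus classifier) and the update order are illustrative assumptions, not the released code.

```python
# Hypothetical, dependency-free sketch of an alternating update schedule for
# GAN-based domain alignment: a shared embedding F and classifier C are trained
# jointly with a GAN (G, D) so that adversarial feedback pulls source and
# target features together. Each entry of `updates` stands in for a real
# gradient step on the corresponding component.
def adaptation_epoch(source_batches, target_batches, updates):
    schedule = []
    for src, tgt in zip(source_batches, target_batches):
        updates["D"](src, tgt)    # discriminator step: real vs. generated samples
        schedule.append("D")
        updates["G"](src, tgt)    # generator step: fool D using embedded features
        schedule.append("G")
        updates["F_C"](src, tgt)  # embedding + classifier step: source task loss
        schedule.append("F_C")    # plus adversarial feedback, aligning domains
    return schedule
```

The point of the sketch is the ordering: the embedding is updated with a signal that flows through the GAN, which is what makes the relationship "symbiotic" rather than a separate generate-then-retrain pipeline.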
Robust Learning under Distributional Shifts
Designing robust models is critical for reliable deployment of artificial intelligence systems. Deep neural networks perform exceptionally well on test samples that are drawn from the same distribution as the training set. However, they perform poorly when there is a mismatch between training and test conditions, a phenomenon called distributional shift. For instance, the perception system of a self-driving car can produce erratic predictions when it encounters a new test sample with a different illumination or weather condition not seen during training. Such inconsistencies are undesirable, and can potentially create life-threatening conditions as these models are deployed in safety-critical applications.
In this dissertation, we develop several techniques for effectively handling distributional shifts in deep learning systems.
In the first part of the dissertation, we focus on detecting out-of-distribution shifts that can be used for flagging outlier samples at test-time. We develop a likelihood estimation framework based on deep generative models for this task. In the second part, we study the domain adaptation problem, where the objective is to tune neural network models to adapt to a specific target distribution of interest. We design novel adaptation algorithms and analyze them under various settings. In the last part of the dissertation, we develop robust learning algorithms that can generalize to novel distributional shifts. In particular, we focus on two types of shift: covariate and adversarial shifts. All developed algorithms are rigorously evaluated on several benchmark datasets.
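The first part of the dissertation flags outliers by estimating likelihoods under a deep generative model. A minimal sketch of that thresholding idea, with a 1-D Gaussian standing in for the deep model (all names here are illustrative assumptions, not the dissertation's code):

```python
import math

# Likelihood-based out-of-distribution detection, sketched: fit a generative
# model to the training distribution, then flag any test sample whose
# log-likelihood falls below a low quantile of the training scores.
def fit_gaussian(data):
    mu = sum(data) / len(data)
    var = sum((x - mu) ** 2 for x in data) / len(data)
    return mu, math.sqrt(var)

def log_likelihood(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def make_ood_detector(train_data, quantile=0.05):
    mu, sigma = fit_gaussian(train_data)
    scores = sorted(log_likelihood(x, mu, sigma) for x in train_data)
    threshold = scores[int(len(scores) * quantile)]  # low-likelihood cutoff
    return lambda x: log_likelihood(x, mu, sigma) < threshold  # True -> flag as OOD
```

For a deep generative model, `log_likelihood` would be replaced by the model's density estimate (e.g., from a normalizing flow, or a VAE's likelihood bound), but the threshold-and-flag logic stays the same.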
Winning Lottery Tickets in Deep Generative Models
The lottery ticket hypothesis suggests that sparse sub-networks of a given
neural network, if initialized properly, can be trained to reach performance
comparable to, or even better than, that of the original network. Prior work on
lottery tickets has primarily focused on the supervised learning setup, with several
papers proposing effective ways of finding "winning tickets" in classification
problems. In this paper, we confirm the existence of winning tickets in deep
generative models such as GANs and VAEs. We show that the popular iterative
magnitude pruning approach (with late rewinding) can be used with generative
losses to find the winning tickets. This approach effectively yields tickets
with sparsity up to 99% for AutoEncoders, 93% for VAEs and 89% for GANs on
CIFAR and Celeb-A datasets. We also demonstrate the transferability of winning
tickets across different generative models (GANs and VAEs) sharing the same
architecture, suggesting that winning tickets have inductive biases that could
help train a wide range of deep generative models. Furthermore, we show the
practical benefits of lottery tickets in generative models by detecting tickets
at very early stages in training called "early-bird tickets". Through
early-bird tickets, we can achieve up to 88% reduction in floating-point
operations (FLOPs) and 54% reduction in training time, making it possible to
train large-scale generative models under tight resource constraints. These
results outperform existing early pruning methods such as SNIP (Lee, Ajanthan,
and Torr 2019) and GraSP (Wang, Zhang, and Grosse 2020). Our findings shed
light on the existence of proper network initializations that could improve
the convergence and stability of generative models.
Comment: Published at AAAI 202
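The core procedure named in the abstract, iterative magnitude pruning with rewinding, can be sketched in a few lines. This is a simplified, dependency-free illustration, assuming a flat weight list and a stand-in `train_fn` in place of full generative-model training; the abstract's "late rewinding" corresponds to taking `init_weights` from an early-training snapshot rather than from initialization.

```python
# Iterative magnitude pruning (IMP) sketch: repeatedly train, zero out the
# smallest-magnitude surviving weights, and rewind the survivors.
def magnitude_prune(weights, mask, frac):
    """Drop the smallest-magnitude fraction `frac` of still-active weights."""
    alive = sorted((abs(w), i) for i, (w, m) in enumerate(zip(weights, mask)) if m)
    new_mask = list(mask)
    for _, i in alive[:int(len(alive) * frac)]:
        new_mask[i] = 0
    return new_mask

def iterative_magnitude_pruning(init_weights, train_fn, rounds=3, frac=0.5):
    """Return a "winning ticket": rewound weights under the final sparsity mask.

    `train_fn(weights, mask)` stands in for training the (generative) model
    with the mask applied and returning the trained weights.
    """
    mask = [1] * len(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights, mask)
        mask = magnitude_prune(trained, mask, frac)
    return [w * m for w, m in zip(init_weights, mask)], mask
```

With `frac=0.5` and three rounds, the surviving fraction is (0.5)^3, i.e., 87.5% sparsity; the 89-99% figures in the abstract correspond to running more rounds (or a different per-round fraction) with generative losses driving `train_fn`.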